A polynomial has the form $p(x) = c_n x^n + c_{n-1} x^{n-1} + \cdots + c_0$ with $c_n \neq 0$. It has degree $n$. If $n = 0$, it is a constant polynomial $p(x) = c_0$. If $c_0 \neq 0$ it has degree $0$; if $c_0 = 0$ (the zero polynomial) we define it as having degree $-\infty$.
Let $p(x)$ be a polynomial. If $d(x)$ is a nonzero polynomial, then there are quotient and remainder polynomials $q(x)$ and $r(x)$ such that
$$p(x) = d(x) \cdot q(x) + r(x)$$
where the degree of $r(x)$ is strictly less than the degree of $d(x)$. This is polynomial division.
If we choose $d(x) = x - \lambda$, then $r(x)$ has degree at most zero, i.e. it is a constant, and substituting $x = \lambda$ shows $r(x) = p(\lambda)$.
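As a sketch, the fact that the remainder equals $p(\lambda)$ can be checked with synthetic division; the helper name `synthetic_div` is illustrative, not from the text.

```python
def synthetic_div(coeffs, lam):
    """Divide p(x) by (x - lam). coeffs lists [c_n, ..., c_1, c_0].
    Returns (quotient coefficients, remainder)."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(c + lam * acc[-1])
    return acc[:-1], acc[-1]

# p(x) = x^3 - 2x + 5 divided by (x - 2)
q, r = synthetic_div([1, 0, -2, 5], 2)
print(q, r)            # quotient x^2 + 2x + 2, remainder 9
print(2**3 - 2*2 + 5)  # p(2) = 9, matching the remainder
```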
If the divisor $d(x)$ goes into $p(x)$ evenly, so that $r(x) = 0$, then $d(x)$ is a factor of $p(x)$. Any root of the factor, i.e. $\lambda \in \mathbb{R}$ such that $d(\lambda) = 0$, is also a root of $p(x)$ because
$$p(\lambda) = d(\lambda) \cdot q(\lambda) = 0$$
Conversely, if $\lambda$ is a root of $p(x)$, then $x - \lambda$ divides $p(x)$ because
$$p(x) = (x - \lambda)q(x) + p(\lambda) = (x - \lambda)q(x) + 0 = (x - \lambda)q(x)$$
since $\lambda$ is a root.
An irreducible polynomial over a field (e.g. $\mathbb{Q}$, $\mathbb{R}$) is one that cannot be factored into lesser-degree polynomials over that field.
A degree $0$ or $1$ polynomial is always irreducible over the reals. A degree $2$ polynomial is irreducible over the reals iff its discriminant is negative. No polynomial of degree $n \geq 3$ is irreducible over the reals.
Example 14.1
$x^4 + 1$ can be factored into $(x^2 + \sqrt{2}\,x + 1)(x^2 - \sqrt{2}\,x + 1)$.
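The factorization can be sanity-checked by multiplying the two quadratics' coefficient lists (highest degree first); `polymul` is a small illustrative helper, not from the text.

```python
import math

def polymul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

s = math.sqrt(2)
prod = polymul([1, s, 1], [1, -s, 1])  # (x^2 + √2x + 1)(x^2 - √2x + 1)
print(prod)  # coefficients of x^4 + 1, up to floating-point rounding
```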
More generally, every polynomial with real coefficients can be factored, uniquely, into linear and irreducible quadratic polynomials with real coefficients.
Complex numbers are required to factor irreducible quadratics.
Complex Numbers
To solve the problem that there is no real solution to $x^2 + 1 = 0$, we declare by fiat that $i = \sqrt{-1}$ exists, i.e. $i^2 = -1$.
This creates numbers of the form $a + bi$, with $a, b \in \mathbb{R}$.
A complex number is a number of the form $a + bi$ for $a, b \in \mathbb{R}$. The set of complex numbers is denoted $\mathbb{C}$.
Note $\mathbb{R} \subset \mathbb{C}$: the reals are exactly the complex numbers of the form $a + 0i$.
We can identify $\mathbb{C}$ with $\mathbb{R}^2$ by $a + bi \leftrightarrow \begin{pmatrix} a \\ b \end{pmatrix}$ and draw $\mathbb{C}$ as a plane.
Operations on Complex Numbers
Addition
$$(a + bi) + (c + di) = (a + c) + (b + d)i$$
Multiplication
$$(a + bi)(c + di) = ac + adi + bci + bd(-1) = (ac - bd) + (ad + bc)i$$
Complex Conjugation
$$\overline{a + bi} = a - bi$$
Note that this implies, for $z, w \in \mathbb{C}$,
$$\overline{z + w} = \overline{z} + \overline{w} \qquad \overline{zw} = \overline{z} \cdot \overline{w}$$
Proof of the above: let $z = a + bi$ and $w = c + di$. Then
$$\overline{z + w} = \overline{(a + bi) + (c + di)} = \overline{(a + c) + (b + d)i} = (a + c) - (b + d)i = (a - bi) + (c - di) = \overline{z} + \overline{w}$$
$$\overline{zw} = \overline{(a + bi)(c + di)} = \overline{(ac - bd) + (ad + bc)i} = (ac - bd) - (ad + bc)i = \overline{z} \cdot \overline{w}$$
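These identities can be spot-checked with Python's built-in complex type, whose `conjugate()` method flips the sign of the imaginary part (the sample values are arbitrary):

```python
z, w = 3 + 4j, 1 - 2j

# Conjugation distributes over both addition and multiplication.
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
print((z * w).conjugate())   # (11+2j)
```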
Absolute Value
$$|a + bi| = \sqrt{a^2 + b^2}$$
Note this is a real number.
Also, $|z| = \sqrt{z\overline{z}}$ since
$$z\overline{z} = (a + bi)(a - bi) = a^2 - b^2(-1) = a^2 + b^2$$
and $|zw| = |z||w|$ since
$$|zw| = |(ac - bd) + (ad + bc)i| = \sqrt{(ac - bd)^2 + (ad + bc)^2} = \sqrt{a^2c^2 + b^2d^2 - 2abcd + a^2d^2 + b^2c^2 + 2abcd} = \sqrt{(a^2 + b^2)(c^2 + d^2)} = \sqrt{a^2 + b^2} \cdot \sqrt{c^2 + d^2} = |z||w|$$
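Both absolute-value identities can be spot-checked with sample values (a sketch using Python's built-in `abs`):

```python
import math

z, w = 3 + 4j, 5 - 12j

assert abs(z) == 5.0                              # √(3² + 4²)
assert (z * z.conjugate()).real == abs(z) ** 2    # z·z̄ = |z|²
assert math.isclose(abs(z * w), abs(z) * abs(w))  # |zw| = |z||w|
print(abs(z * w))
```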
Division by a nonzero real number
$$\frac{a + bi}{c} = \frac{a}{c} + \frac{b}{c}i$$
Division by a nonzero complex number
$$\frac{z}{w} = \frac{z\overline{w}}{w\overline{w}} = \frac{z\overline{w}}{|w|^2}$$
Note that $|w|^2$ is a real number.
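As a sketch of this division rule with sample values, the manual computation $z\overline{w}/|w|^2$ agrees with Python's built-in complex division:

```python
import cmath

z, w = 11 + 2j, 3 - 4j
manual = z * w.conjugate() / abs(w) ** 2   # z·w̄ / |w|²
assert cmath.isclose(manual, z / w)
print(manual)   # (1+2j)
```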
Real and Imaginary Part
$$R(a + bi) = a \qquad I(a + bi) = b$$
Polar Coordinates for Complex Numbers
Any complex number $z$ also has polar coordinates
$$z = |z|(\cos\theta + i\sin\theta)$$
where $\theta$ is the argument of $z$, denoted $\theta = \arg(z)$.
Note $\arg(\overline{z}) = -\arg(z)$.
When you multiply complex numbers, you multiply the absolute values and add the arguments:
$$|zw| = |z||w| \qquad \arg(zw) = \arg(z) + \arg(w)$$
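This can be sketched with `cmath.polar`, which returns the pair $(|z|, \arg z)$:

```python
import cmath
import math

z, w = 1 + 1j, 2j
rz, az = cmath.polar(z)        # (√2, π/4)
rw, aw = cmath.polar(w)        # (2, π/2)
rp, ap = cmath.polar(z * w)

assert math.isclose(rp, rz * rw)
assert math.isclose(ap, az + aw)   # arguments add (modulo 2π in general)
```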
Fundamental Theorem of Algebra
Fundamental Theorem of Algebra: every degree $n$ polynomial has exactly $n$ complex roots, counting multiplicity.
In other words, for any degree $n$ polynomial $f(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$, there exist (not necessarily distinct) $\lambda_1, \ldots, \lambda_n$ such that
$$f(x) = (x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n)$$
If $f(x)$ has real coefficients and $\lambda$ is a root, then $\overline{\lambda}$ is also a root, since if $f(\lambda) = 0$,
$$0 = \overline{f(\lambda)} = \overline{\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0} = \overline{\lambda}^n + a_{n-1}\overline{\lambda}^n{}^{-1} + \cdots + a_1\overline{\lambda} + a_0 = f(\overline{\lambda})$$
(using that each $a_k$ is real, so $\overline{a_k} = a_k$).
So for polynomials with real coefficients, non-real roots come in conjugate pairs.
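A sketch using numpy's numerical root finder on a real-coefficient polynomial (the example polynomial is mine, not from the text):

```python
import numpy as np

# x^3 - x^2 + x - 1 = (x - 1)(x^2 + 1): one real root and a conjugate pair.
roots = np.roots([1, -1, 1, -1])
for target in (1, 1j, -1j):
    assert any(abs(r - target) < 1e-8 for r in roots)
print(roots)
```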
Complex Vector Spaces
We define a complex vector space in the same way we defined vector spaces over $\mathbb{R}$, with the same operations: vector addition, scalar multiplication, matrix operations, etc.
Like $\mathbb{R}^n$, we denote by $\mathbb{C}^n$ the set of ordered $n$-tuples of complex numbers:
$$\mathbb{C}^n = \{(z_1, \ldots, z_n) \mid z_1, \ldots, z_n \in \mathbb{C}\}$$
They can also be written as column vectors of $n$ complex entries:
$$\mathbb{C}^n = \left\{ \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} \,\middle|\, z_1, \ldots, z_n \in \mathbb{C} \right\}$$
Note that since $\mathbb{R} \subset \mathbb{C}$, $\mathbb{R}^n \subset \mathbb{C}^n$.
If $v = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} \in \mathbb{C}^n$, we can write $z_1 = a_1 + ib_1, \ldots, z_n = a_n + ib_n$ with $a_1, \ldots, a_n, b_1, \ldots, b_n \in \mathbb{R}$. Thus,
$$v = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix} + i\begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}, \quad \text{with } \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}, \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} \in \mathbb{R}^n$$
So, every vector $v \in \mathbb{C}^n$ can be uniquely written as a linear combination $v = a + ib$ with $a, b \in \mathbb{R}^n$.
Proof
We have shown we can write it, so it remains to show it is unique.
Recall that in $v$, each $z_k = a_k + ib_k$ for $k \in \{1, \ldots, n\}$, where $a_k, b_k \in \mathbb{R}$ are the $k$-th entries of $a$ and $b$ respectively. Therefore, we must have $R(z_k) = a_k$ and $I(z_k) = b_k$, which is a well-defined function, making $a$ and $b$ unique.
From that, we define $R(v) = a$ and $I(v) = b$, so that $v = R(v) + iI(v)$.
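This decomposition corresponds directly to numpy's `.real` and `.imag` attributes, which extract the entrywise real and imaginary parts (a sketch with an arbitrary sample vector):

```python
import numpy as np

v = np.array([1 + 2j, 3 - 1j, 5j])
a, b = v.real, v.imag           # R(v) and I(v), both in R^n
assert np.allclose(a + 1j * b, v)   # v = R(v) + i·I(v)
print(a, b)
```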
Some Facts
$$R(0) = I(0) = 0$$
$$R(v + w) = R(v) + R(w), \quad I(v + w) = I(v) + I(w)$$
$$R(rv) = rR(v), \quad I(rv) = rI(v), \quad \text{for all } r \in \mathbb{R}$$
$$R(iv) = -I(v), \quad I(iv) = R(v)$$
The fact that $v = R(v) + iI(v)$ with $R(v), I(v) \in \mathbb{R}^n$ implies that $\mathbb{R}^n$ spans $\mathbb{C}^n$ over the scalars $\mathbb{C}$. So, if a set $S \subseteq \mathbb{R}^n$ spans $\mathbb{R}^n$ over real scalars, then it also spans $\mathbb{C}^n$ over complex scalars.
If $a_1, \ldots, a_k \in \mathbb{R}^n$ is a linearly independent subset of the real vector space $\mathbb{R}^n$, then $a_1, \ldots, a_k$ is a linearly independent subset of the complex vector space $\mathbb{C}^n$.
Proof
Suppose $z_1 a_1 + \cdots + z_k a_k = 0$ with $z_1, \ldots, z_k \in \mathbb{C}$. Then, using $z_i = R(z_i) + iI(z_i)$ for $i \in \{1, \ldots, k\}$, we have
$$[R(z_1)a_1 + \cdots + R(z_k)a_k] + i[I(z_1)a_1 + \cdots + I(z_k)a_k] = 0$$
Since both $R(z_1)a_1 + \cdots + R(z_k)a_k$ and $I(z_1)a_1 + \cdots + I(z_k)a_k$ are in $\mathbb{R}^n$, we must have
$$R(z_1)a_1 + \cdots + R(z_k)a_k = R(z_1 a_1 + \cdots + z_k a_k) = R(0) = 0$$
$$I(z_1)a_1 + \cdots + I(z_k)a_k = I(z_1 a_1 + \cdots + z_k a_k) = I(0) = 0$$
Since $a_1, \ldots, a_k$ is linearly independent and $R(z_1), \ldots, R(z_k), I(z_1), \ldots, I(z_k) \in \mathbb{R}$, we must have
$$R(z_1) = \cdots = R(z_k) = I(z_1) = \cdots = I(z_k) = 0$$
and therefore $z_1 = \cdots = z_k = 0$.
This implies any basis $B = \langle b_1, \ldots, b_n \rangle \subset \mathbb{R}^n$ is also a basis of $\mathbb{C}^n$. In particular, the standard basis $\mathcal{E}_n$ of $\mathbb{R}^n$ is also the standard basis of $\mathbb{C}^n$.
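As a numerical sketch, a (non-standard) basis of $\mathbb{R}^2$ also gives unique complex coordinates to a vector in $\mathbb{C}^2$; the basis and vector below are arbitrary examples:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # columns form a basis of R^2
v = np.array([2 + 1j, 3 - 2j])   # a vector in C^2

coords = np.linalg.solve(B, v)   # unique complex coordinates of v w.r.t. B
assert np.allclose(B @ coords, v)
print(coords)
```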
Complex Linear Maps
If $t: \mathbb{R}^n \to \mathbb{R}^m$ is a linear map, then we can define a map $t_{\mathbb{C}}: \mathbb{C}^n \to \mathbb{C}^m$ by
$$t_{\mathbb{C}}(v) = t(R(v)) + i\,t(I(v))$$
In particular, $t_{\mathbb{C}}(v) = t(v)$ for $v \in \mathbb{R}^n$.
This map $t_{\mathbb{C}}$ is linear, and its matrix representation with respect to the complex standard bases equals the matrix representation of $t$ with respect to the real standard bases.
The map $t_{\mathbb{C}}$ is called the complexification of $t$, or $t$ complexified.
The proof is relatively simple: prove linearity, then prove the matrices are the same.
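Concretely, the same real matrix represents both maps: applying it to a complex vector agrees with $t(R(v)) + i\,t(I(v))$. A sketch with an arbitrary real matrix and complex vector:

```python
import numpy as np

T = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # matrix of a real linear map t
v = np.array([1 + 2j, 3 - 1j])   # a vector in C^2

tc = T @ v.real + 1j * (T @ v.imag)   # t(R(v)) + i·t(I(v))
assert np.allclose(tc, T @ v)         # same matrix also represents t_C
print(tc)
```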
So, all the results in real vector spaces carry over to complex vector spaces.
Similarity
Recall that two matrices $H$ and $\hat{H}$ are matrix equivalent if there are nonsingular matrices $P$ and $Q$ such that $\hat{H} = PHQ$ (e.g. $H$ and $\hat{H}$ represent the same map with respect to different bases).
We now consider the special case of transformations, where the codomain equals the domain, and add the requirement that the basis is also the same: i.e. representations with respect to $B, B$ and $D, D$.
The matrices $T$ and $\hat{T}$ are similar if there is a nonsingular $P$ such that
$$\hat{T} = PTP^{-1}$$
Note that the zero matrix $Z$ and the identity matrix $I$ are each similar only to themselves, since
$$PZP^{-1} = ZP^{-1} = Z \qquad PIP^{-1} = PP^{-1} = I$$
Similarity is an equivalence relation. Similar matrices are matrix equivalent (take $Q = P^{-1}$), so similarity is a special case of matrix equivalence. However, the converse is not true: matrix-equivalent matrices need not be similar.
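A quick numerical sketch of similarity: conjugating $T$ by any nonsingular $P$ yields a similar matrix. (The trace check is a standard fact about similar matrices, not stated in the notes above; the specific matrices are arbitrary examples.)

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # a nonsingular change-of-basis matrix

T_hat = P @ T @ np.linalg.inv(P)           # a matrix similar to T
assert np.isclose(np.trace(T_hat), np.trace(T))   # similar matrices share trace
print(T_hat)
```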